Compositional Correctness and Completeness for Symbolic Partial Order Reduction
Partial Order Reduction (POR) and Symbolic Execution (SE) are two fundamental abstraction techniques in program analysis. SE is particularly useful as a state abstraction technique for sequential programs, while POR addresses equivalent interleavings in the execution of concurrent programs. Recently, several promising connections between these two approaches have been investigated, resulting in symbolic partial order reduction: partial order reduction of symbolically executed programs. In this work, we provide compositional notions of completeness and correctness for symbolic partial order reduction. We formalize completeness and correctness for (1) abstraction over program states and (2) trace equivalence, such that the abstraction gives rise to a complete and correct SE, the trace equivalence gives rise to a complete and correct POR, and their combination results in complete and correct symbolic partial order reduction. We develop our results for a core parallel imperative programming language and mechanize the proofs in Coq.
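As a toy illustration of the reduction POR achieves (not the paper's formalism), the following Python sketch enumerates all interleavings of two two-action threads and collapses them into Mazurkiewicz trace equivalence classes, treating actions of different threads as independent unless they touch the same variable:

```python
# Two threads; an action is a pair (thread, variable written).
# Actions of different threads are independent iff they touch
# different variables; swapping adjacent independent actions
# yields an equivalent interleaving (a Mazurkiewicz trace).

def independent(a, b):
    return a[0] != b[0] and a[1] != b[1]

def canonical(schedule):
    # Normal form: bubble adjacent independent actions into a
    # fixed order; dependent pairs (and program order) are kept.
    s = list(schedule)
    changed = True
    while changed:
        changed = False
        for i in range(len(s) - 1):
            if independent(s[i], s[i + 1]) and s[i] > s[i + 1]:
                s[i], s[i + 1] = s[i + 1], s[i]
                changed = True
    return tuple(s)

def interleavings(t1, t2):
    # All order-preserving merges of the two threads' action lists.
    if not t1: return [list(t2)]
    if not t2: return [list(t1)]
    return ([[t1[0]] + rest for rest in interleavings(t1[1:], t2)] +
            [[t2[0]] + rest for rest in interleavings(t1, t2[1:])])

t1 = [(1, 'x'), (1, 'y')]   # thread 1 writes x, then y
t2 = [(2, 'z'), (2, 'x')]   # thread 2 writes z, then x (conflict on x)
all_runs = interleavings(t1, t2)          # 6 interleavings
classes = {canonical(run) for run in all_runs}  # only 2 distinct traces
```

Only the ordering of the two conflicting writes to `x` matters, so six interleavings collapse to two equivalence classes; POR explores one representative per class.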
An Analysis Tool for Models of Virtualized Systems
This paper gives an example-driven introduction to modeling and analyzing virtualized systems, e.g., in cloud computing, using virtually timed ambients, a process algebra developed to study timing aspects of resource management for (nested) virtual machines. The calculus supports nested virtualization, and virtual machines compete with other processes for the resources of their host environment. Resource provisioning in virtually timed ambients extends the capabilities of mobile ambients to model the dynamic creation, migration, and destruction of virtual machines. Quality-of-service properties for virtually timed ambients can be formally expressed using modal contracts describing aspects of resource provisioning, and verified using a model checker for virtually timed ambients, implemented in the rewriting system Maude.
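The slowdown that nesting imposes on a virtual machine can be caricatured with a back-of-the-envelope sketch (a fair-scheduling simplification of ours, not the virtually timed ambient calculus itself): each nesting level splits its host's capacity evenly among the processes competing at that level.

```python
def effective_speed(host_speed, competitors_per_level):
    # Each nesting level divides its host's capacity evenly among
    # the processes competing at that level (fair-scheduling
    # assumption; the real calculus tracks this via virtual time).
    speed = host_speed
    for n in competitors_per_level:
        speed /= n
    return speed

# A VM nested two levels deep, sharing with 2 processes at the
# outer level and 4 at the inner level, on a host of speed 1.0:
print(effective_speed(1.0, [2, 4]))  # 0.125
```

The point the calculus makes precise is that resource provisioning compounds multiplicatively down the nesting hierarchy.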
AFSD: Adaptive Feature Space Distillation for Distributed Deep Learning
We propose a novel and adaptive feature space distillation method (AFSD) to reduce the communication overhead among distributed computers. The proposed method improves the Codistillation process by supporting longer update intervals. AFSD performs knowledge distillation across the models infrequently and gives the models flexibility to explore diverse variations in the training process. We perform knowledge distillation by sharing the feature space instead of only the outputs. Accordingly, we also propose a new loss function for the Codistillation technique in AFSD. Using the feature space leads to more efficient knowledge transfer between models with longer update intervals. In our method, the models can achieve the same accuracy as Allreduce and Codistillation with fewer epochs.
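The abstract does not reproduce AFSD's exact loss; as a hedged sketch only, a feature-space distillation objective can be written as a task loss plus a penalty on the distance to the (infrequently exchanged) peer features. The function name and the weight `alpha` below are illustrative assumptions:

```python
import numpy as np

def feature_distill_loss(logits, labels, features, peer_features, alpha=0.5):
    # Task term: cross-entropy of this model's own predictions.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    ce = -np.log(probs[np.arange(len(labels)), labels]).mean()
    # Distillation term: match the peer model's feature space
    # (exchanged only at long intervals) instead of its outputs.
    feat_gap = ((features - peer_features) ** 2).mean()
    return ce + alpha * feat_gap
```

Matching features rather than output logits is what lets the penalty remain informative even when peer snapshots are stale, which is what makes longer update intervals viable.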
Modeling and Simulation of Spark Streaming
As more and more devices connect to the Internet of Things, unbounded streams of data will be generated, which have to be processed "on the fly" in order to trigger automated actions and deliver real-time services. Spark Streaming is a popular real-time stream processing framework. Making efficient use of Spark Streaming and achieving stable stream processing requires a careful interplay between different parameter configurations. Mistakes may lead to significant resource overprovisioning and poor performance. To alleviate such issues, this paper develops an executable and configurable model named SSP (Spark Streaming Processing) to model and simulate Spark Streaming. SSP is written in ABS, a formal, executable, and object-oriented language for modeling distributed systems by means of concurrent object groups. SSP allows users to rapidly evaluate and compare different parameter configurations without deploying their applications on a cluster/cloud. The simulation results show that SSP is able to mimic Spark Streaming in different scenarios. Comment: 7 pages and 13 figures. This paper is published in the IEEE 32nd International Conference on Advanced Information Networking and Applications (AINA 2018).
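The stability concern behind this kind of parameter tuning can be illustrated with a minimal micro-batch queue sketch (our simplification, not the SSP/ABS model): a streaming job is stable only if each micro-batch is processed at least as fast as the interval at which new batches arrive.

```python
def backlog_after(batches, batch_interval, proc_time_per_batch):
    # Discrete sketch: one new micro-batch arrives per interval;
    # a single executor drains the queue at a fixed per-batch
    # processing time. Stable iff proc time <= batch interval.
    queue = 0.0  # outstanding processing work, in seconds
    for _ in range(batches):
        queue += proc_time_per_batch              # new batch enqueued
        queue = max(0.0, queue - batch_interval)  # work done this interval
    return queue

stable   = backlog_after(10, 1.0, 0.8)  # queue drains every interval
unstable = backlog_after(10, 1.0, 1.2)  # backlog grows ~0.2 s per batch
```

When processing time exceeds the batch interval, the backlog grows without bound, which is exactly the kind of misconfiguration a simulator lets users detect before deployment.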
Godot: All the Benefits of Implicit and Explicit Futures (Artifact)
This artifact contains an implementation of data-flow futures in terms of control-flow futures, in the Scala language. In the implementation, we show microbenchmarks that solve the three identified problems from the paper:
1) The Type Proliferation Problem,
2) The Fulfilment Observation Problem, and
3) The Future Proliferation Problem
There are also detailed instructions on design decisions that differ from the formal semantics, and on the limits of how much can be encoded in the Scala language. We provide examples, e.g., the creation of a proxy service using data-flow futures, as well as tests that exercise different parts of the type system.
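The artifact's encoding is in Scala; purely as an illustrative sketch, the key behavior of a data-flow future — nesting collapses transparently, so a task that returns a future never exposes a `Flow[Flow[T]]` type to the caller — can be mimicked in Python:

```python
from concurrent.futures import ThreadPoolExecutor, Future

class Flow:
    # Data-flow future sketch (our toy, not the Godot encoding):
    # get() transparently collapses any nesting, avoiding the
    # type-proliferation issue of control-flow futures.
    def __init__(self, value):
        self._value = value
    def get(self):
        val = self._value
        while isinstance(val, (Flow, Future)):
            val = val.get() if isinstance(val, Flow) else val.result()
        return val

with ThreadPoolExecutor() as pool:
    inner = pool.submit(lambda: 42)
    outer = pool.submit(lambda: Flow(inner))  # task returns a future
    assert Flow(outer).get() == 42            # nesting collapsed
```

With a control-flow future, the caller of `outer` would see a future of a future and have to unwrap twice; the data-flow discipline makes one `get` suffice regardless of nesting depth.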
Lazy Product Discovery in Huge Configuration Spaces
Highly configurable software systems can have thousands of interdependent configuration options across different subsystems. In the resulting configuration space, discovering a valid product configuration for some selected options can be complex and error-prone. The configuration space can be organized using a feature model, fragmented into smaller interdependent feature models reflecting the configuration options of each subsystem.
We propose a method for lazy product discovery in large fragmented feature models with interdependent features. We formalize the method and prove its soundness and completeness. The evaluation explores an industrial-size configuration space. The results show that lazy product discovery has significant performance benefits compared to standard product discovery, which, in contrast to our method, requires all fragments to be composed to analyze the feature model. Furthermore, the method succeeds when more efficient, heuristics-based engines fail to find a valid configuration.
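As a hedged illustration of the idea (a toy structure of our own, not the paper's feature-model formalism), lazy discovery loads only the fragments reachable from the selected features, leaving unrelated fragments unanalyzed:

```python
# Toy fragmented feature model: each fragment maps a feature to
# the features it requires, which may live in other fragments.
FRAGMENTS = {
    "kernel": {"smp": ["sched"], "sched": []},
    "net":    {"tcp": ["ip"], "ip": ["sched"]},  # crosses into kernel
    "gui":    {"x11": []},                       # never needed below
}

def discover(selected):
    # Load only the fragments reachable from the selection,
    # closing the required-feature set transitively.
    loaded, required, todo = set(), set(), list(selected)
    while todo:
        feat = todo.pop()
        if feat in required:
            continue
        required.add(feat)
        frag = next(name for name, fs in FRAGMENTS.items() if feat in fs)
        loaded.add(frag)
        todo.extend(FRAGMENTS[frag][feat])
    return required, loaded

config, fragments = discover(["tcp"])
# Selecting "tcp" pulls in "net" and "kernel"; "gui" is never loaded.
```

Standard (eager) product discovery would compose all three fragments before analysis; the lazy variant never touches `gui`, which is where the performance benefit on large fragmented models comes from.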
- …